
    Tenure, Wage Profiles and Monitoring

    We investigate the relationship between the slope of the wage-tenure profile and the level of monitoring across two cross sections of matched employer-employee British data. Our theoretical model predicts that increased monitoring leads to a decline in the slope of the wage-tenure profile. Our empirical analysis provides strong support for this prediction. Keywords: monitoring, tenure, efficiency wages

    Tenure, Wage Profiles and Monitoring

    Efficiency wage theory predicts that firms can induce worker effort by the carrot of high wages and/or the stick of monitoring worker performance. Another option available to firms is to tilt the remuneration package over time such that the lure of high future earnings acts as a deterrent to current shirking. In this paper we undertake the first empirical investigation of this relationship between the slope of the wage-tenure profile and the level of monitoring. On the assumption that firms strive for the optimal trade-off between these various instruments, we hypothesise that increased monitoring leads to a decline in the slope of the wage-tenure profile. Our empirical analysis, using two cross sections of matched employer-employee British data, provides robust support for this prediction. Keywords: monitoring, tenure, efficiency wages

    Tenure, Wage Profiles and Monitoring

    We undertake the first empirical investigation of the relationship between the slope of the wage-tenure profile and the level of monitoring. On the assumption that firms strive for the optimal trade-off between these various instruments, we hypothesise that increased monitoring leads to a decline in the slope of the wage-tenure profile. Our empirical analysis, using two cross sections of matched employer-employee British data, provides robust support for this prediction. Keywords: efficiency wages; tenure; monitoring
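
    A minimal sketch of how the predicted sign could be checked empirically, assuming a log-wage regression with a tenure-by-monitoring interaction estimated on synthetic data; the specification, variable names, and data below are illustrative assumptions, not taken from the papers above.

        # Hypothetical illustration: if monitoring flattens the wage-tenure
        # profile, the tenure x monitoring interaction should be negative.
        # Synthetic data; not the authors' specification or dataset.
        import numpy as np
        import pandas as pd
        import statsmodels.formula.api as smf

        rng = np.random.default_rng(0)
        n = 5000
        tenure = rng.uniform(0, 20, n)             # years with current employer
        monitoring = rng.integers(0, 2, n)         # 1 = closely monitored workplace
        log_wage = (2.0 + 0.03 * tenure
                    - 0.02 * tenure * monitoring   # flatter profile when monitored
                    + rng.normal(0, 0.3, n))
        df = pd.DataFrame({"log_wage": log_wage, "tenure": tenure,
                           "monitoring": monitoring})

        # A negative 'tenure:monitoring' coefficient is the sign the theory predicts.
        fit = smf.ols("log_wage ~ tenure * monitoring", data=df).fit()
        print(fit.params[["tenure", "tenure:monitoring"]])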

    Coarse Bifurcation Studies of Bubble Flow Microscopic Simulations

    The parametric behavior of regular periodic arrays of rising bubbles is investigated with the aid of two-dimensional BGK Lattice-Boltzmann (LB) simulators. The Recursive Projection Method is implemented and coupled to the LB simulators, accelerating their convergence towards what we term coarse steady states. Efficient stability/bifurcation analysis is performed by computing the leading eigenvalues/eigenvectors of the coarse time stepper. Our approach constitutes the basis for system-level analysis of processes modeled through microscopic simulations. Comment: 4 pages, 3 figures
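
    A minimal sketch of the coarse stability computation described above, assuming the microscopic simulator is wrapped as a black-box coarse time stepper (lift, run, restrict); the matrix-free finite-difference Jacobian and the ARPACK eigensolver used here are illustrative choices, and the toy coarse map merely stands in for the lattice-Boltzmann bubble simulation.

        # Leading eigenvalues of the coarse time stepper's Jacobian around a
        # coarse steady state u_star; the state is coarse-stable if all of
        # them have magnitude below 1. The coarse map below is a toy stand-in
        # for lift -> LB micro-simulation -> restrict.
        import numpy as np
        from scipy.sparse.linalg import LinearOperator, eigs

        def coarse_time_stepper(u):
            # Toy contracting map with nearest-neighbour coupling, standing in
            # for the wrapped microscopic simulator.
            return 0.9 * np.roll(u, 1) + 0.05 * np.tanh(u)

        def leading_coarse_eigenvalues(u_star, k=6, eps=1e-6):
            n = u_star.size
            phi0 = coarse_time_stepper(u_star)

            def jvp(v):
                # Finite-difference directional derivative of the coarse map.
                return (coarse_time_stepper(u_star + eps * v) - phi0) / eps

            jac = LinearOperator((n, n), matvec=jvp)
            return eigs(jac, k=k, which="LM", return_eigenvectors=False)

        u_star = np.zeros(50)                                # fixed point of the toy map
        print(np.abs(leading_coarse_eigenvalues(u_star)))    # all below 1: stable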

    Dynamic QoS optimization architecture for cloud-based DDDAS

    Cloud computing creates the need for novel on-demand approaches in which the Quality of Service (QoS) requirements of cloud-based services can dynamically and adaptively evolve at runtime as Service Level Agreements (SLAs) and the environment change. Given the unpredictable, dynamic and on-demand nature of the cloud, it would be unrealistic to assume that optimal QoS can be achieved at design time. As a result, there is an increasing need for dynamic and self-adaptive QoS optimization solutions that respond to runtime changes in SLAs and the environment. In this context, we posit that the challenge of self-adaptive QoS optimization encompasses two dynamics, related to QoS sensitivity and to conflicting objectives at runtime. We propose a novel design of a dynamic data-driven architecture for optimizing QoS under those dynamics. The architecture leverages DDDAS primitives, employing distributed simulations and symbiotic feedback loops to dynamically adapt the decision-making metaheuristics that optimize QoS trade-offs in cloud-based systems. We use a scenario to exemplify and evaluate the approach.
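
    A minimal sketch of the kind of symbiotic feedback loop the architecture describes, assuming runtime QoS measurements re-weight the optimization objectives and a what-if simulation scores candidate configurations before they are enacted; every name below (measure_qos, simulate_candidate, the SLA values) is a hypothetical stand-in, not part of the proposed architecture's API.

        # Illustrative DDDAS-style loop: runtime measurements steer what-if
        # simulations, whose results re-weight the optimiser's QoS objectives.
        import random

        SLA = {"latency_ms": 200.0, "cost_per_hour": 5.0}

        def measure_qos(config):
            # Stand-in for runtime monitoring of the deployed service.
            return {"latency_ms": 60.0 + 300.0 * random.random() / config["replicas"],
                    "cost_per_hour": 1.2 * config["replicas"]}

        def simulate_candidate(config):
            # Stand-in for a distributed what-if simulation of a candidate config.
            return measure_qos(config)

        def violation(qos, weights):
            # Weighted amount by which measured QoS exceeds the SLA targets.
            return sum(w * max(0.0, qos[k] - SLA[k]) for k, w in weights.items())

        def control_loop(config, weights, steps=10):
            for _ in range(steps):
                observed = measure_qos(config)
                # Symbiotic feedback: emphasise whichever objective is slipping.
                for k in weights:
                    weights[k] *= 1.1 if observed[k] > SLA[k] else 0.95
                # Let the simulation pick the best neighbouring configuration.
                candidates = [dict(config, replicas=max(1, config["replicas"] + d))
                              for d in (-1, 0, 1)]
                config = min(candidates,
                             key=lambda c: violation(simulate_candidate(c), weights))
            return config

        print(control_loop({"replicas": 2}, {"latency_ms": 1.0, "cost_per_hour": 1.0}))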

    Automated Dynamic Resource Provisioning and Monitoring in Virtualized Large-Scale Datacenter

    Infrastructure as a Service (IaaS) is a pay-as-you-go cloud provisioning model which outsources, on demand, physical servers, guest virtual machine (VM) instances, storage resources, and networking connections. This article reports the design and development of our proposed symbiotic-simulation-based system to support the automated management of an IaaS-based distributed virtualized datacenter. To make the ideas work in practice, we have implemented an OpenStack-based open-source cloud computing platform. A smart benchmarking application, the Cloud Rapid Experimentation and Analysis Tool (aka CBTool), is used to benchmark the resource allocation potential of our test cloud system. The real-time benchmarking metrics of the cloud are fed to a distributed multi-agent intelligence middleware layer. To optimally control the dynamic operation of the prototype datacenter, we predefine custom policies for VM provisioning and application performance profiling within the versatile cloud modeling and simulation toolkit CloudSim. Both tools used in our prototype implementation can scale up to thousands of VMs; our devised mechanism is therefore highly scalable and can flexibly be extended to large-scale deployments. Autonomic characteristics of the agents help streamline the symbiosis between the simulation system and the IaaS cloud in a closed feedback control loop. The practical worth and applicability of the multi-agent technology lies in the fact that the technique is inherently scalable and can therefore be implemented efficiently within complex cloud computing environments. To demonstrate the efficacy of our approach, we have deployed a lightweight, representative scenario for monitoring and provisioning virtual machines within the testbed. Experimental results indicate a notable improvement in the resource provisioning profile of the virtualized datacenter when our proposed strategy is incorporated.
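
    A minimal sketch of the closed monitor-simulate-provision loop described above; the threshold policy and every helper below are toy stand-ins and do not correspond to CBTool, CloudSim, or OpenStack API calls.

        # Sketch of a symbiotic monitor -> simulate -> provision loop for an
        # IaaS VM pool; provisioning actions are gated by a what-if simulation.
        import random

        CPU_HIGH, CPU_LOW = 0.80, 0.30           # illustrative policy thresholds

        def collect_metrics(vms):
            # Stand-in for the benchmark feed: average CPU utilisation of the pool.
            return min(1.0, random.uniform(0.5, 2.0) / vms)

        def simulate_utilisation(current_util, current_vms, target_vms):
            # Stand-in for a what-if simulation: load spread over the target VM count.
            return current_util * current_vms / target_vms

        def scale_to(target_vms):
            # Stand-in for the provisioning action on the virtualized testbed.
            print(f"provisioning to {target_vms} VM(s)")

        def control_loop(vms=2, steps=10):
            for _ in range(steps):
                util = collect_metrics(vms)
                target = vms + 1 if util > CPU_HIGH else vms - 1 if util < CPU_LOW else vms
                target = max(1, target)
                # Act only if the simulation predicts the new size stays within the band.
                if target != vms and CPU_LOW <= simulate_utilisation(util, vms, target) <= CPU_HIGH:
                    scale_to(target)
                    vms = target
            return vms

        control_loop()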

    Design and evaluation of parallel hashing over large-scale data

    High-performance analytical data processing systems often run on servers with large amounts of memory. A common data structure used in such environments is the hash table. This paper focuses on investigating efficient parallel hash algorithms for processing large-scale data. Currently, hash tables on distributed architectures are accessed one key at a time by local or remote threads, while shared-memory approaches focus on accessing a single table with multiple threads. A relatively straightforward “bulk-operation” approach seems to have been neglected by researchers. In this work, using such a method, we propose a high-level parallel hashing framework, Structured Parallel Hashing, targeting efficient processing of massive data on distributed memory. We present a theoretical analysis of the proposed method and describe the design of our hashing implementations. The evaluation reveals a very interesting result: the proposed straightforward method can vastly outperform distributed hashing methods and can even offer performance comparable with approaches based on shared-memory supercomputers which use specialized hardware predicates. Moreover, we characterize the performance of our hash implementations through extensive experiments, thereby allowing system developers to make a more informed choice for their high-performance applications.
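
    A minimal sketch of the bulk-operation idea the abstract alludes to, assuming keys are partitioned by hash so each worker builds and probes a private table without locks or per-key remote accesses; the partitioning scheme and process layout are illustrative assumptions, not the paper's Structured Parallel Hashing framework.

        # Bulk build and bulk probe: the parent partitions keys by hash, each
        # worker owns one partition and touches only its own table.
        from multiprocessing import Pool

        NPART = 4

        def partition_pairs(pairs):
            parts = [[] for _ in range(NPART)]
            for k, v in pairs:
                parts[hash(k) % NPART].append((k, v))
            return parts

        def build(part):
            return dict(part)                    # one private hash table per partition

        def probe(args):
            table, keys = args
            return [table.get(k) for k in keys]

        if __name__ == "__main__":
            data = [(f"key{i}", i) for i in range(200_000)]
            queries = [f"key{i}" for i in range(0, 200_000, 97)]

            with Pool(NPART) as pool:
                tables = pool.map(build, partition_pairs(data))        # bulk build
                qparts = [[] for _ in range(NPART)]
                for k in queries:
                    qparts[hash(k) % NPART].append(k)
                hits = pool.map(probe, list(zip(tables, qparts)))      # bulk probe
            print(sum(len(r) for r in hits), "keys probed in bulk")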

    Exact-Differential Large-Scale Traffic Simulation

    Analyzing large-scale traffic by simulation requires repeating the execution many times with various patterns of scenarios or parameters. Such repeated execution introduces substantial redundancy, because the change from one scenario to the next is usually minor, for example blocking only one road or changing the speed limit of a few roads. In this paper, we propose a new redundancy reduction technique, called exact-differential simulation, which simulates only the changed parts of a scenario in later executions while keeping exactly the same results as a whole simulation. The paper consists of two main efforts: (i) the key idea and algorithm of exact-differential simulation, and (ii) a method to build large-scale traffic simulation on top of exact-differential simulation. In experiments on a Tokyo traffic simulation, exact-differential simulation improves elapsed time by a factor of 7.26 on average, and by a factor of 2.26 even in the worst case, compared with running the whole simulation.
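
    A minimal sketch of the exact-differential idea on a toy chain of road segments, assuming the baseline run logs every segment's per-step state and outflow so a modified scenario recomputes a segment at a time step only when its parameters, inflow, or carried state differ from the log; the road model and data structures are illustrative assumptions, not the paper's traffic simulator.

        # Toy exact-differential re-simulation: reuse logged results wherever the
        # changed scenario is provably identical to the baseline, recompute the
        # rest, and end up with exactly the same states as a full re-run.
        def step(state, inflow, capacity):
            out = min(state, capacity)           # cars leaving this segment
            return state - out + inflow, out

        def full_run(init, capacities, steps, source=5):
            log, states = [], list(init)
            for _ in range(steps):
                inflow, outs = source, []
                for i, cap in enumerate(capacities):
                    states[i], out = step(states[i], inflow, cap)
                    outs.append(out)
                    inflow = out
                log.append((list(states), list(outs)))
            return log

        def differential_run(base_log, init, base_caps, new_caps, steps, source=5):
            states = list(init)
            for t in range(steps):
                base_states, base_outs = base_log[t]
                base_prev = init if t == 0 else base_log[t - 1][0]
                inflow, base_inflow = source, source
                for i, cap in enumerate(new_caps):
                    unchanged = (cap == base_caps[i] and inflow == base_inflow
                                 and states[i] == base_prev[i])
                    if unchanged:
                        states[i], out = base_states[i], base_outs[i]   # reuse the log
                    else:
                        states[i], out = step(states[i], inflow, cap)   # recompute
                    inflow, base_inflow = out, base_outs[i]
            return states

        init, caps, steps = [0, 0, 0, 0], [4, 4, 4, 4], 20
        base_log = full_run(init, caps, steps)
        new_caps = [4, 2, 4, 4]                  # later scenario: one road's capacity cut
        fast = differential_run(base_log, init, caps, new_caps, steps)
        assert fast == full_run(init, new_caps, steps)[-1][0]           # exactly equal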